Bounding Cache-Related Preemption Delay for Real-Time Systems
Authors
Abstract
Cache memory is used in almost all computer systems today to bridge the ever-increasing speed gap between the processor and main memory. However, its use in multitasking computer systems introduces additional preemption delay due to the reloading of memory blocks that are replaced during preemption. This cache-related preemption delay poses a serious problem in real-time computing systems where predictability is of utmost importance. In this paper, we propose an enhanced technique for analyzing and thus bounding the cache-related preemption delay in fixed-priority preemptive scheduling, focusing on instruction caching. The proposed technique improves upon previous techniques in two important ways. First, the technique takes into account the relationship between a preempted task and the set of tasks that execute during the preemption when calculating the cache-related preemption delay. Second, the technique considers the phasing of tasks to eliminate many infeasible task interactions. These two features are expressed as constraints of a linear programming problem whose solution gives a guaranteed upper bound on the cache-related preemption delay. This paper also compares the proposed technique with previous techniques using randomly generated task sets. The results show that the improvement in the worst-case response time prediction by the proposed technique over previous techniques ranges between 5 percent and 18 percent depending on the cache refill time when the task set utilization is 0.6. The results also show that as the cache refill time increases, the improvement increases, which indicates that accurate prediction of cache-related preemption delay by the proposed technique becomes increasingly important if the current trend of a widening speed gap between the processor and main memory continues.
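To give a concrete sense of where such a CRPD bound enters the schedulability analysis referred to in the abstract, the sketch below runs the classical iterative worst-case response-time recurrence for fixed-priority preemptive scheduling with a per-preemption CRPD charge added to each higher-priority task. This is only a generic illustration under assumed inputs: the task set, the crpd_bound values, and the function name are hypothetical, and the paper's linear-programming formulation for computing tighter CRPD bounds is not reproduced here.

```python
# Minimal sketch: response-time analysis with a per-preemption CRPD charge.
# All numbers and names are illustrative assumptions, not taken from the paper.
from math import ceil

def response_time(task, higher_prio, crpd_bound):
    """Fixed-point iteration on R = C_i + sum_j ceil(R / T_j) * (C_j + g_j),
    where g_j is an assumed upper bound on the cache reload delay charged
    to each preemption by higher-priority task j."""
    r = task["wcet"]
    while True:
        interference = sum(
            ceil(r / hp["period"]) * (hp["wcet"] + crpd_bound[hp["name"]])
            for hp in higher_prio
        )
        new_r = task["wcet"] + interference
        if new_r == r:              # fixed point reached: worst-case response time
            return r
        if new_r > task["period"]:  # exceeds the period (deadline): unschedulable
            return None
        r = new_r

# Illustrative task set; periods double as deadlines.
tasks = [
    {"name": "t1", "wcet": 2, "period": 10},
    {"name": "t2", "wcet": 4, "period": 25},
    {"name": "t3", "wcet": 8, "period": 60},
]
crpd_bound = {"t1": 1, "t2": 1}     # assumed per-preemption CRPD bounds
print(response_time(tasks[2], tasks[:2], crpd_bound))   # -> 19
```

In this setting, tightening the per-preemption CRPD term, which the paper does by constraining which tasks execute during a preemption and by considering task phasing, directly shrinks the computed worst-case response times.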
Similar Articles
Enhanced Analysis of Cache-related Preemption Delay in Fixed-priority Preemptive Scheduling
Cache memory is used in almost all computer systems today to bridge the ever increasing speed gap between the processor and main memory. However, its use in multitasking computer systems introduces additional preemption delay due to reloading of memory blocks that were replaced during preemption. This cache-related preemption delay poses a serious problem in real-time computing systems where pr...
ILP-Based Program Path Analysis for Bounding Worst-Case Inter-Task Cache Conflicts
The unpredictable behavior of cache memory makes it difficult to statically analyze the worst-case performance of real-time systems. This problem is further exacerbated in the case of preemptive multitask systems because of inter-task cache interference, called Cache-Related Preemption Delay (CRPD). This paper proposes an approach to analyzing a tight upper bound on the CRPD which a task...
Tightening the Bounds on Cache-Related Preemption Delay in Fixed Preemption Point Scheduling
Limited Preemptive Fixed Preemption Point scheduling (LP-FPP) has the ability to decrease and control preemption-related overheads in real-time task systems, compared to other limited or fully preemptive scheduling approaches. However, existing methods for computing the preemption overheads in LP-FPP systems rely on over-approximation of the evicting cache block (ECB) calculations, pot...
Integration of Cache Related Preemption Delay Analysis in Priority Assignment Algorithm
Handling cache-related preemption delay (CRPD) in a preemptive scheduling context for real-time systems remains an open problem despite its practical importance. Priority assignment algorithms and feasibility tests are usually based on the assumption that preemption cost is negligible. Consequently, a system that could be schedulable at design time can fail to meet its timing constraints in practice d...
Analysis of cache-related preemption delay in fixed-priority preemptive scheduling
We propose a technique for analyzing cache-related preemption delays of tasks that cause unpredictable variation in task execution time in the context of fixed-priority preemptive scheduling. The proposed technique consists of two steps. The first step performs a per-task analysis to estimate cache-related preemption cost for each execution point in a given task from the number of useful cache ...
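The last entry above refers to estimating the preemption cost at each execution point from the number of useful cache blocks (UCBs). Below is a minimal sketch of that per-point bound, assuming a fixed per-block refill time; the function name, block identifiers, and refill time are illustrative and not taken from that paper.

```python
def preemption_cost(useful_blocks_at_point, refill_time):
    """Upper bound on the extra delay if a preemption occurs at this point:
    every useful cache block (cached now and reused later) may have been
    evicted by the preempting tasks and must be reloaded."""
    return len(useful_blocks_at_point) * refill_time

# Example: three useful cache blocks at some execution point, 10-cycle refill (assumed).
print(preemption_cost({3, 7, 12}, refill_time=10))   # -> 30
```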
Journal: IEEE Trans. Software Eng.
Volume: 27, Issue: -
Pages: -
Year of publication: 2001